Avoiding Overfitting: A Survey on Regularization Methods for Convolutional Neural Networks
Authors
Claudio Filipi Gonçalves dos Santos, João Paulo Papa
Abstract
Several image processing tasks, such as classification and object detection, have been significantly improved using Convolutional Neural Networks (CNN). Like ResNet and EfficientNet, many architectures have achieved outstanding results in at least one dataset by the time of their creation. A critical factor in training concerns the network's regularization, which prevents the structure from overfitting. This work analyzes several regularization methods developed in the last few years, showing significant improvements for different CNN models. The works are classified into three main areas: the first, called "data augmentation", gathers techniques that focus on performing changes in the input data. The second, named "internal changes", describes procedures that modify the feature maps generated by the neural network or the kernels. The last one, called "label", concerns transforming the labels of a given input. This work presents two main differences compared to other available surveys about regularization: (i) the papers gathered in this manuscript are no older than five years, and (ii) the second distinction is reproducibility, i.e., every work referred to here has its code available in public repositories or is directly implemented in some framework, such as TensorFlow or Torch.
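To make the taxonomy concrete, here is a minimal PyTorch sketch with one representative from each family; the specific transforms, dropout rate, and smoothing value are illustrative choices, not recommendations from the survey:

    import torch.nn as nn
    from torchvision import transforms

    # (i) Data augmentation: regularize by changing the input data.
    train_transform = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomCrop(32, padding=4),
        transforms.ToTensor(),
    ])

    # (ii) Internal changes: perturb feature maps inside the network.
    model = nn.Sequential(
        nn.Conv2d(3, 64, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Dropout2d(p=0.1),   # zeroes entire feature maps during training
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(64, 10),
    )

    # (iii) Label regularization: transform the targets.
    criterion = nn.CrossEntropyLoss(label_smoothing=0.1)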
منابع مشابه
Convolutional neural networks with low-rank regularization
Large CNNs have delivered impressive performance in various computer vision applications, but their storage and computation requirements make it problematic to deploy these models on mobile devices. Recently, tensor decompositions have been used for speeding up CNNs. In this paper, we further develop the tensor decomposition technique. We propose a new algorithm for computing the low-rank ten...
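A minimal sketch of the general idea, assuming the common scheme of factorizing a k x k convolution into a k x 1 convolution followed by a 1 x k one through a rank-r bottleneck; the exact algorithm of the paper may differ:

    import torch.nn as nn

    def low_rank_conv(in_ch, out_ch, k, rank):
        """Approximate a k x k convolution with two thin convolutions.

        A full k x k conv has in_ch*out_ch*k*k weights; the factorized
        pair has rank*k*(in_ch + out_ch), far fewer for small rank.
        Assumes odd k so that k // 2 padding preserves spatial size.
        """
        return nn.Sequential(
            nn.Conv2d(in_ch, rank, kernel_size=(k, 1), padding=(k // 2, 0)),
            nn.Conv2d(rank, out_ch, kernel_size=(1, k), padding=(0, k // 2)),
        )

    # Example: a 256->256 3x3 conv (~590k weights) becomes a rank-32 pair (~49k).
    layer = low_rank_conv(256, 256, k=3, rank=32)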
Max-Pooling Dropout for Regularization of Convolutional Neural Networks
Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking an activation based on a multinomial distribution at training time. In light of this insight, we advoc...
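A minimal sketch of the idea, assuming the usual formulation of applying dropout to the units feeding a max-pooling layer (the module name is illustrative):

    import torch.nn as nn

    class MaxPoolingDropout(nn.Module):
        """Dropout applied inside the pooling stage rather than after it."""
        def __init__(self, p=0.5, kernel_size=2):
            super().__init__()
            self.drop = nn.Dropout(p)          # zeroes units (and rescales survivors) in training mode
            self.pool = nn.MaxPool2d(kernel_size)

        def forward(self, x):
            # At training time the max is taken over the surviving units of
            # each region, which for non-negative activations corresponds to
            # multinomial sampling over the region's activations.
            return self.pool(self.drop(x))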
Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined wi...
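A minimal sketch of the training-time behaviour described above; at test time the method instead weights each activation by its probability, a path omitted here. The function assumes height and width are divisible by the pooling size:

    import torch
    import torch.nn.functional as F

    def stochastic_pool2d(x, k=2):
        """Stochastic pooling (training mode): sample one activation per
        k x k region with probability proportional to its activity."""
        n, c, h, w = x.shape
        # Gather each pooling region into a column: (N, C*k*k, L).
        regions = F.unfold(x, kernel_size=k, stride=k)
        regions = regions.view(n, c, k * k, -1)            # (N, C, k*k, L)
        probs = regions.clamp(min=0) + 1e-12               # activities as weights
        probs = probs / probs.sum(dim=2, keepdim=True)     # multinomial per region
        flat_probs = probs.permute(0, 1, 3, 2).reshape(-1, k * k)
        idx = torch.multinomial(flat_probs, 1)             # one pick per region
        flat_regions = regions.permute(0, 1, 3, 2).reshape(-1, k * k)
        picked = flat_regions.gather(1, idx)
        return picked.view(n, c, h // k, w // k)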
Comparison of Regularization Methods for ImageNet Classification with Deep Convolutional Neural Networks
Large and Deep Convolutional Neural Networks achieve good results in image classification tasks, but they need methods to prevent overfitting. In this paper we compare the performance of different regularization techniques on the ImageNet Large Scale Visual Recognition Challenge 2013. We show empirically that Dropout works better than DropConnect on the ImageNet dataset.
DropAll: Generalization of Two Convolutional Neural Network Regularization Methods
We introduce DropAll, a generalization of DropOut [1] and DropConnect [2], for regularization of fully-connected layers within convolutional neural networks. Applying these methods amounts to subsampling a neural network by dropping units. When training with DropOut, a randomly selected subset of activations is dropped; when training with DropConnect, we drop a randomly selected subset of weights. With Drop...
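To make the contrast concrete, here is a minimal PyTorch sketch of the two underlying operations that DropAll generalizes; the class name is illustrative, and the inference pass uses a simple mean-field scaling rather than the sampling scheme of the DropConnect paper:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DropConnectLinear(nn.Linear):
        """Linear layer that drops a random subset of *weights* (DropConnect),
        as opposed to nn.Dropout, which drops a random subset of *activations*."""
        def __init__(self, in_features, out_features, p=0.5):
            super().__init__(in_features, out_features)
            self.p = p

        def forward(self, x):
            if self.training:
                mask = torch.rand_like(self.weight) >= self.p
                return F.linear(x, self.weight * mask, self.bias)
            # Inference: mean-field approximation via the expected weights.
            return F.linear(x, self.weight * (1 - self.p), self.bias)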
Journal
Journal title: ACM Computing Surveys
Year: 2022
ISSN: 0360-0300, 1557-7341
DOI: https://doi.org/10.1145/3510413